cli: Introduce an upgrade command #2564
Conversation
Integration test results for 50e2109: success 🎉
Integration test results for e8db3ce: success 🎉
Integration test results for 218d901: fail 😕
This change moves resource-templating logic into a dedicated template, creates new values types to model kubernetes resource constraints, and changes the `--ha` flag's behavior to create these resource templates instead of hardcoding the resource constraints in the various templates. This is in service of #2564.
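The resource-templating approach described above can be sketched roughly as follows. This is a minimal illustration only: the `Resources` type, its field names, the template text, and `renderResources` are hypothetical stand-ins, not the PR's actual definitions.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Resources is a values type modeling kubernetes resource constraints,
// along the lines this change describes (hypothetical field names).
type Resources struct {
	CPURequest    string
	MemoryRequest string
	CPULimit      string
	MemoryLimit   string
}

// resourceTmpl is a dedicated resource template, so individual
// component templates no longer hardcode their constraints.
const resourceTmpl = `resources:
  requests:
    cpu: {{.CPURequest}}
    memory: {{.MemoryRequest}}
  limits:
    cpu: {{.CPULimit}}
    memory: {{.MemoryLimit}}
`

// renderResources executes the shared template against a Resources value.
func renderResources(r Resources) (string, error) {
	t, err := template.New("resources").Parse(resourceTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, r); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// With --ha, the CLI would populate a Resources value like this
	// instead of baking constraints into each template.
	out, err := renderResources(Resources{"100m", "50Mi", "1", "250Mi"})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```

The design benefit is that toggling `--ha` only changes which values are fed to one template, rather than editing constraints scattered across many templates.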
Integration test results for cf88617: fail 😕
Integration test results for 12741c8: fail 😕
Integration test results for 50f27e3: fail 😕
Integration test results for 14b68ed: fail 😕
Integration test results for ca41e5d: fail 😕
Integration test results for cb1e71e: success 🎉
Integration test results for 5ef3a0d: fail 😕
The `install` command errors when the deploy target contains an existing Linkerd deployment. The `upgrade` command is introduced to reinstall or reconfigure the Linkerd control plane. Upgrade works as follows:

1. The controller config is fetched from the Kubernetes API. The Public API is not used, because we need to be able to reinstall the control plane when the Public API is not available; and we are not concerned about RBAC restrictions preventing the installer from reading the config (as we are for inject).
2. The install configuration is read, particularly the flags used during the last install/upgrade. If these flags were not set again during the upgrade, the previous values are used as if they were passed this time. The configuration is updated from the combination of these values, including the install configuration itself. Note that some flags, including the linkerd-version, are omitted since they are stored elsewhere in the configurations and don't make sense to track as overrides.
3. The issuer secrets are read from the Kubernetes API so that they can be re-used. There is currently no way to reconfigure issuer certificates. We will need to create _another_ workflow for updating these credentials.
4. The install rendering is invoked with values and config fetched from the cluster, synthesized with the new configuration.
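Step 2's flag-override behavior can be sketched like this. The maps, flag names, and `mergeFlags` helper are hypothetical stand-ins for illustration; the real code works against the stored install configuration and pflag flag sets.

```go
package main

import "fmt"

// mergeFlags sketches the upgrade flow's override rule: flags set
// explicitly on this invocation win; flags left unset fall back to the
// values stored from the last install/upgrade.
func mergeFlags(stored, setThisRun map[string]string) map[string]string {
	merged := make(map[string]string, len(stored))
	for k, v := range stored {
		merged[k] = v // start from the previous install's values
	}
	for k, v := range setThisRun {
		merged[k] = v // explicitly passed flags override them
	}
	return merged
}

func main() {
	stored := map[string]string{"ha": "true", "proxy-log-level": "info"}
	current := map[string]string{"proxy-log-level": "debug"}
	// "ha" is preserved from the last install; the log level is overridden.
	fmt.Println(mergeFlags(stored, current))
}
```

Under this rule, running `upgrade` with no flags reproduces the previous configuration, which is what makes a plain reinstall safe.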
Integration test results for e813001: fail 😕
Works for me, I tested:
I turned |
@grampelberg can you use |
Integration test results for 4095c7d: success 🎉
That removes it from the output, I've not tried your |
Overall looks (and works!) well. Had a few comments around structure, docs, and tests. 👍 🚢
@@ -0,0 +1,192 @@
package cmd
consider adding some tests for this file. we'll probably want integration tests as well, but that could be done in a subsequent PR.
in the interest of unblocking the release, i'm going to do this in a followup
Integration test results for e23d85c: success 🎉
So the suggestion was We confirmed this works and removes the Apparently this doesn't affect the creation of new pods. But to be able to clean it up we need to add the |
Looking good to me, modulo @siggy suggestions 👍
LGTM. Just two comments.
Maybe in a future PR, we can consider whether using a sub-struct to organize the `*Values` and `*Options` will make `validateAndBuild()` and `validate()` easier to follow (or not).
// If we cannot determine whether the configuration exists, an error is returned.
func linkerdConfigAlreadyExistsInCluster() (bool, error) {
api, err := k8s.NewAPI(kubeconfigPath, kubeContext)
func exitIfClusterExists() { |
Other than the error handling, this looks very similar to the `newK8s()` and `fetchConfigs()` in upgrade.go. If we can remove the dependency on `upgradeOption()`, can we re-use them here?
func exitIfClusterExists() {
- kubeConfig, err := k8s.GetConfig(kubeconfigPath, kubeContext)
+ k, err := newK8s()
if err != nil {
fmt.Fprintln(os.Stderr, "Unable to build a Kubernetes client to check for configuration. If this expected, use the --ignore-cluster flag.")
fmt.Fprintf(os.Stderr, "Error: %s\n", err)
os.Exit(1)
}
- k, err := kubernetes.NewForConfig(kubeConfig)
+ _, err = fetchConfigs(k)
if err != nil {
- fmt.Fprintln(os.Stderr, "Unable to build a Kubernetes client to check for configuration. If this expected, use the --ignore-cluster flag.")
- fmt.Fprintf(os.Stderr, "Error: %s\n", err)
- os.Exit(1)
- }
-
- c := k.CoreV1().ConfigMaps(controlPlaneNamespace)
- if _, err = c.Get(k8s.ConfigConfigMapName, metav1.GetOptions{}); err != nil {
if kerrors.IsNotFound(err) {
return
}
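The error-handling distinction the suggested diff relies on can be sketched in isolation: a "not found" error from the config lookup means there is no prior install (safe to proceed), while any other error is fatal. Here `errNotFound`, `fetchConfig`, and `checkClusterExists` are hypothetical stand-ins for `kerrors.IsNotFound` and the real `fetchConfigs` call, shown only to illustrate the control flow.

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the Kubernetes API's "not found" error,
// which the real code detects with kerrors.IsNotFound.
var errNotFound = errors.New("configmap not found")

// fetchConfig is a stand-in for fetching the linkerd-config ConfigMap.
func fetchConfig(exists bool) error {
	if !exists {
		return errNotFound
	}
	return nil
}

// checkClusterExists mirrors exitIfClusterExists's three outcomes:
// not-found means no existing install, any other error is fatal,
// and success means a config already exists.
func checkClusterExists(exists bool) (string, error) {
	err := fetchConfig(exists)
	switch {
	case errors.Is(err, errNotFound):
		return "no existing install; safe to proceed", nil
	case err != nil:
		return "", fmt.Errorf("unable to check for configuration: %w", err)
	default:
		return "config already exists; refusing to install", nil
	}
}

func main() {
	msg, _ := checkClusterExists(false)
	fmt.Println(msg)
	msg, _ = checkClusterExists(true)
	fmt.Println(msg)
}
```

Treating not-found as the success path is what lets `install` run on a fresh cluster while still refusing to clobber an existing deployment.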
one nit then 🚢
cli/cmd/install.go
Outdated
@@ -224,7 +224,7 @@ func newCmdInstall() *cobra.Command {
cmd.PersistentFlags().AddFlagSet(flags)

// Some flags are not available during upgrade, etc.
cmd.PersistentFlags().AddFlagSet(options.installOnlyFlagSet(pflag.ExitOnError))
cmd.PersistentFlags().AddFlagSet(options.flagSet(pflag.ExitOnError))
i think this is redundant with line 204: flags := options.flagSet(pflag.ExitOnError)
?
Integration test results for 7068101: success 🎉
Integration test results for eead6d9: success 🎉
CI passes but the issue isn't getting notified https://travis-ci.org/linkerd/linkerd2/jobs/514357190
Integration test results for 374562e: success 🎉
The `install` command errors when the deploy target contains an existing Linkerd deployment. The `upgrade` command is introduced to reinstall or reconfigure the Linkerd control plane.

Upgrade works as follows:

1. The controller config is fetched from the Kubernetes API. The Public API is not used, because we need to be able to reinstall the control plane when the Public API is not available; and we are not concerned about RBAC restrictions preventing the installer from reading the config (as we are for inject).
2. The install configuration is read, particularly the flags used during the last install/upgrade. If these flags were not set again during the upgrade, the previous values are used as if they were passed this time. The configuration is updated from the combination of these values, including the install configuration itself. Note that some flags, including the linkerd-version, are omitted since they are stored elsewhere in the configurations and don't make sense to track as overrides.
3. The issuer secrets are read from the Kubernetes API so that they can be re-used. There is currently no way to reconfigure issuer certificates. We will need to create another workflow for updating these credentials.
4. The install rendering is invoked with values and config fetched from the cluster, synthesized with the new configuration.

Fixes #2556